Creative Practice for Classical String Players with Live Looping
In recent years, string pedagogy discussions have highlighted the need for greater creative practice among classical string players. Since the second half of the nineteenth century, string methods have narrowed to a limited scope of improvisatory techniques, paralleling the decline of improvisation in Western classical performance practice. This thesis explores live looping as a practice tool to facilitate learning concepts and help string players develop musicianship skills, including improvisation, participate in non-classical genres, and explore their creative voices. Examining the results of string educators who incorporate live looping into their own teaching reveals the tool's effectiveness in bridging curricular standards with avenues for creativity and open-ended experimentation. Ultimately, live looping can help string players learn a concept more deeply, employing scaffolding techniques to practice abstract models and thus relying less on any specific example, such as learning from sheet music. This encourages a broader musical foundation, enabling classical string players to feel more equipped in areas beyond their comfort zones and to participate in and enjoy a wider range of musically fulfilling experiences.
TADA: Task-Agnostic Dialect Adapters for English
Large Language Models, the dominant starting point for Natural Language
Processing (NLP) applications, fail at a higher rate for speakers of English
dialects other than Standard American English (SAE). Prior work addresses this
using task-specific data or synthetic data augmentation, both of which require
intervention for each dialect and task pair. This poses a scalability issue
that prevents the broad adoption of robust dialectal English NLP. We introduce
a simple yet effective method for task-agnostic dialect adaptation by aligning
non-SAE dialects using adapters and composing them with task-specific adapters
from SAE. Task-Agnostic Dialect Adapters (TADA) improve dialectal robustness on
4 dialectal variants of the GLUE benchmark without task-specific supervision.
Comment: 5 pages; ACL Findings paper 202
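The composition idea behind TADA can be sketched in toy form. Everything below (shapes, random weights, the function names) is a hypothetical illustration rather than the TADA implementation: a residual bottleneck "dialect adapter" first maps a non-SAE representation toward SAE space, and a separately trained, frozen SAE task adapter is then applied on top, so no task-dialect pair needs its own data.

```python
import numpy as np

rng = np.random.default_rng(0)

def bottleneck_adapter(dim, hidden):
    """A residual bottleneck adapter: x + up(relu(down(x)))."""
    down = rng.normal(scale=0.1, size=(dim, hidden))
    up = rng.normal(scale=0.1, size=(hidden, dim))
    def apply(x):
        return x + np.maximum(x @ down, 0.0) @ up
    return apply

dim = 16
dialect_adapter = bottleneck_adapter(dim, 4)  # aligns non-SAE reps toward SAE space
task_adapter = bottleneck_adapter(dim, 4)     # trained once on SAE task data, frozen

def compose(x):
    # Task-agnostic composition: dialect alignment first, then the SAE task adapter.
    return task_adapter(dialect_adapter(x))

x = rng.normal(size=dim)  # a hypothetical token representation
y = compose(x)
print(y.shape)  # (16,)
```

Because the dialect adapter is trained once per dialect and reused across tasks, the intervention cost scales with the number of dialects rather than dialect-task pairs.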
Impressions: Understanding Visual Semiotics and Aesthetic Impact
Is aesthetic impact different from beauty? Is visual salience a reflection of
its capacity for effective communication? We present Impressions, a novel
dataset through which to investigate the semiotics of images, and how specific
visual features and design choices can elicit specific emotions, thoughts and
beliefs. We posit that the impactfulness of an image extends beyond formal
definitions of aesthetics, to its success as a communicative act, where style
contributes as much to meaning formation as the subject matter. However, prior
image captioning datasets are not designed to empower state-of-the-art
architectures to model potential human impressions or interpretations of
images. To fill this gap, we design an annotation task heavily inspired by
image analysis techniques in the Visual Arts to collect 1,440 image-caption
pairs and 4,320 unique annotations exploring impact, pragmatic image
description, impressions, and aesthetic design choices. We show that existing
multimodal image captioning and conditional generation models struggle to
simulate plausible human responses to images. However, this dataset
significantly improves their ability to model impressions and aesthetic
evaluations of images through fine-tuning and few-shot adaptation.
Comment: To be published in EMNLP 202
Properly Learning Decision Trees with Queries Is NP-Hard
We prove that it is NP-hard to properly PAC learn decision trees with
queries, resolving a longstanding open problem in learning theory (Bshouty
1993; Guijarro-Lavin-Raghavan 1999; Mehta-Raghavan 2002; Feldman 2016). While
there has been a long line of work, dating back to (Pitt-Valiant 1988),
establishing the hardness of properly learning decision trees from random
examples, the more challenging setting of query learners necessitates different
techniques and there were no previous lower bounds. En route to our main
result, we simplify and strengthen the best known lower bounds for a different
problem of Decision Tree Minimization (Zantema-Bodlaender 2000; Sieling 2003).
On a technical level, we introduce the notion of hardness distillation, which
we study for decision tree complexity but can be considered for any complexity
measure: for a function that requires large decision trees, we give a general
method for identifying a small set of inputs that is responsible for its
complexity. Our technique even rules out query learners that are allowed
constant error. This contrasts with existing lower bounds for the setting of
random examples which only hold for inverse-polynomial error.
Our result, taken together with a recent almost-polynomial time query
algorithm for properly learning decision trees under the uniform distribution
(Blanc-Lange-Qiao-Tan 2022), demonstrates the dramatic impact of distributional
assumptions on the problem.
Comment: 41 pages, 10 figures, FOCS 202
Multi-VALUE: A Framework for Cross-Dialectal English NLP
Dialect differences caused by regional, social, and economic factors cause
performance discrepancies for many groups of language technology users.
Inclusive and equitable language technology must critically be dialect
invariant, meaning that performance remains constant over dialectal shifts.
Current systems often fall short of this ideal since they are designed and
tested on a single dialect: Standard American English (SAE). We introduce a
suite of resources for evaluating and achieving English dialect invariance. The
resource is called Multi-VALUE, a controllable rule-based translation system
spanning 50 English dialects and 189 unique linguistic features. Multi-VALUE
maps SAE to synthetic forms of each dialect. First, we use this system to
stress test question answering, machine translation, and semantic parsing.
Stress tests reveal significant performance disparities for leading models on
non-standard dialects. Second, we use this system as a data augmentation
technique to improve the dialect robustness of existing systems. Finally, we
partner with native speakers of Chicano and Indian English to release new
gold-standard variants of the popular CoQA task. To execute the transformation
code, run model checkpoints, and download both synthetic and gold-standard
dialectal benchmark datasets, see http://value-nlp.org.
Comment: ACL 202
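A rule-based translation system of this kind can be sketched with a few toy perturbation rules. The rules below are hypothetical regex caricatures loosely inspired by widely documented dialectal features (zero copula, negative concord); the actual Multi-VALUE system conditions 189 linguistically grounded features on sentence structure, not surface regexes.

```python
import re

# Toy, hypothetical perturbation rules (not Multi-VALUE's feature set).
RULES = [
    (re.compile(r"\bis not\b"), "ain't"),                    # negation variant
    (re.compile(r"\bdoesn't have any\b"), "don't have no"),  # negative concord
    (re.compile(r"\b(he|she|it) is\b"), r"\1"),              # zero copula
]

def translate(sentence, rules=RULES):
    """Apply each rule left to right, producing a synthetic dialect variant."""
    for pattern, repl in rules:
        sentence = pattern.sub(repl, sentence)
    return sentence

print(translate("she is happy and doesn't have any money"))
# -> "she happy and don't have no money"
```

The same transformation, applied to training data rather than test data, is what turns the system from a stress test into an augmentation technique.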
A Query-Optimal Algorithm for Finding Counterfactuals
We design an algorithm for finding counterfactuals with strong theoretical
guarantees on its performance. For any monotone model and
instance , our algorithm makes queries to and returns {an {\sl optimal}} counterfactual for
: a nearest instance to for which . Here is the sensitivity of , a discrete analogue of the
Lipschitz constant, and is the distance from to
its nearest counterfactuals. The previous best known query complexity was
, achievable by brute-force local search. We
further prove a lower bound of on the query complexity of any algorithm, thereby showing that the
guarantees of our algorithm are essentially optimal.Comment: 22 pages, ICML 202
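The brute-force local search baseline mentioned in the abstract is easy to sketch for models over the Boolean cube (a simplifying assumption for illustration): expand Hamming balls around the instance until the model's prediction flips.

```python
from itertools import combinations

def nearest_counterfactual(f, x):
    """Brute-force local search: try all points at Hamming distance
    1, 2, ... from x and return the first x' with f(x') != f(x).
    Query cost grows roughly as d**Delta, the baseline the paper improves on."""
    d = len(x)
    y = f(x)
    for radius in range(1, d + 1):
        for flips in combinations(range(d), radius):
            xp = list(x)
            for i in flips:
                xp[i] = 1 - xp[i]
            xp = tuple(xp)
            if f(xp) != y:
                return xp  # a nearest counterfactual
    return None  # f is constant: no counterfactual exists

# Example: a monotone model (3-bit majority) and an all-zeros instance.
maj = lambda x: int(sum(x) >= 2)
print(nearest_counterfactual(maj, (0, 0, 0)))  # -> (1, 1, 0)
```

The paper's algorithm replaces the base $d$ in this query count with the typically much smaller sensitivity $S(f)$, at the cost of requiring monotonicity.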
A Strong Composition Theorem for Junta Complexity and the Boosting of Property Testers
We prove a strong composition theorem for junta complexity and show how such
theorems can be used to generically boost the performance of property testers.
The -approximate junta complexity of a function is the
smallest integer such that is -close to a function that
depends only on variables. A strong composition theorem states that if
has large -approximate junta complexity, then has even
larger -approximate junta complexity, even for . We develop a fairly complete understanding of this behavior,
proving that the junta complexity of is characterized by that of
along with the multivariate noise sensitivity of . For the important
case of symmetric functions , we relate their multivariate noise sensitivity
to the simpler and well-studied case of univariate noise sensitivity.
We then show how strong composition theorems yield boosting algorithms for
property testers: with a strong composition theorem for any class of functions,
a large-distance tester for that class is immediately upgraded into one for
small distances. Combining our contributions yields a booster for junta
testers, and with it new implications for junta testing. This is the first
boosting-type result in property testing, and we hope that the connection to
composition theorems adds compelling motivation to the study of both topics.Comment: 44 pages, 1 figure, FOCS 202
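The central quantity can be made concrete by brute force on tiny examples. The sketch below (exponential time, purely illustrative) computes the $\varepsilon$-approximate junta complexity of a Boolean function under the uniform distribution: for each candidate variable set $S$, the closest function depending only on $S$ takes a majority vote of $f$ within each projection onto $S$.

```python
from itertools import combinations, product

def junta_complexity(f, n, eps):
    """Smallest r such that f is eps-close (under the uniform distribution
    on {0,1}^n) to a function of only r variables. Brute force for tiny n."""
    points = list(product([0, 1], repeat=n))
    total = len(points)
    for r in range(n + 1):
        for S in combinations(range(n), r):
            # Best r-junta on S: majority vote of f within each S-projection.
            buckets = {}
            for x in points:
                key = tuple(x[i] for i in S)
                buckets.setdefault(key, []).append(f(x))
            errors = sum(min(vals.count(0), vals.count(1))
                         for vals in buckets.values())
            if errors / total <= eps:
                return r
    return n

xor2 = lambda x: x[0] ^ x[1]  # depends on 2 of the 4 variables
print(junta_complexity(xor2, 4, 0.0))  # -> 2
print(junta_complexity(xor2, 4, 0.5))  # -> 0
```

The second call shows why the error parameter matters: at $\varepsilon = 1/2$ even the constant function is close enough, which is exactly the regime where a strong composition theorem must keep the complexity of $g \circ f$ large.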
Certification with an NP Oracle
In the certification problem, the algorithm is given a function $f$ with
certificate complexity $k$ and an input $x^\star$, and the goal is to find a
certificate of size $\le \mathrm{poly}(k)$ for $f$'s value at $x^\star$. This
problem is in $\mathsf{NP}^{\mathsf{NP}}$, and assuming $\mathsf{P} \ne
\mathsf{NP}$, is not in $\mathsf{P}$. Prior works, dating back to Valiant in
1984, have therefore sought to design efficient algorithms by imposing
assumptions on $f$ such as monotonicity.
Our first result is a $\mathsf{BPP}^{\mathsf{NP}}$ algorithm for the general
problem. The key ingredient is a new notion of the balanced influence of
variables, a natural variant of influence that corrects for the bias of the
function. Balanced influences can be accurately estimated via uniform
generation, and classic $\mathsf{BPP}^{\mathsf{NP}}$ algorithms are known for
the latter task.
We then consider certification with stricter instance-wise guarantees: for
each $x^\star$, find a certificate whose size scales with that of the smallest
certificate for $x^\star$. In sharp contrast with our first result, we show
that this problem is $\mathsf{NP}^{\mathsf{NP}}$-hard even to approximate. We
obtain an optimal inapproximability ratio, adding to a small handful of
problems in the higher levels of the polynomial hierarchy for which optimal
inapproximability is known. Our proof involves the novel use of bit-fixing
dispersers for gap amplification.
Comment: 25 pages, 2 figures, ITCS 202
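What a certificate is can be pinned down with a brute-force sketch for functions over the Boolean cube (illustrative only; the problem is hard precisely because this search is exponential): a certificate for $f$'s value at $x^\star$ is a set of coordinates such that every input agreeing with $x^\star$ on those coordinates gets the same value.

```python
from itertools import combinations, product

def smallest_certificate(f, x):
    """Smallest set S of coordinates such that every y agreeing with x
    on S has f(y) == f(x). Brute force over subsets, for tiny n."""
    n = len(x)
    target = f(x)
    for size in range(n + 1):
        for S in combinations(range(n), size):
            free = [i for i in range(n) if i not in S]
            if all(f(tuple(x[i] if i in S else bits[free.index(i)]
                           for i in range(n))) == target
                   for bits in product([0, 1], repeat=len(free))):
                return S
    return tuple(range(n))

AND = lambda x: int(all(x))
print(smallest_certificate(AND, (1, 1, 1)))  # -> (0, 1, 2)
print(smallest_certificate(AND, (0, 1, 1)))  # -> (0,)
```

The AND example also shows the instance-wise gap the second result concerns: some inputs have certificates of size 1 while others need all $n$ coordinates.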
The Expression of irx7 in the Inner Nuclear Layer of Zebrafish Retina Is Essential for a Proper Retinal Development and Lamination.
Irx7, a member of the zebrafish iroquois transcription factor (TF) family, has been shown to control brain patterning. During retinal development, irx7's expression was found to appear exclusively in the inner nuclear layer (INL) as soon as the prospective INL cells withdraw from the cell cycle, and during retinal lamination. In Irx7-deficient retinas, the formation of a proper retinal lamination was disrupted and the differentiation of INL cell types, including amacrine, horizontal, bipolar and Müller cells, was compromised. Despite irx7's exclusive expression in the INL, photoreceptor differentiation was also compromised in Irx7-deficient retinas. Compared with other retinal cell types, ganglion cells differentiated relatively well in these retinas, except for their dendritic projections into the inner plexiform layer (IPL). In fact, the neuronal projections of amacrine and bipolar cells into the IPL were also diminished. These observations indicate that the retinal lamination defect in Irx7-deficient retinas is likely caused by attenuated neurite outgrowth. Since the expression of known TFs that specify particular retinal cell types was also altered in Irx7-deficient retinas, the irx7 gene network is possibly a novel regulatory circuit for retinal development and lamination.
- …